
    Theoretical, Measured and Subjective Responsibility in Aided Decision Making

    When humans interact with intelligent systems, their causal responsibility for outcomes becomes equivocal. We analyze the descriptive abilities of a newly developed responsibility quantification model (ResQu) to predict actual human responsibility and perceptions of responsibility in the interaction with intelligent systems. In two laboratory experiments, participants performed a classification task, aided by classification systems with different capabilities. We compared the predicted theoretical responsibility values to the actual measured responsibility participants took on and to their subjective rankings of responsibility. The model predictions were strongly correlated with both measured and subjective responsibility. A bias existed only for participants with poor classification capabilities who were aided by a system with superior capabilities: they relied on it less than optimally and assumed higher-than-optimal responsibility. The study implies that when humans interact with advanced intelligent systems whose capabilities greatly exceed their own, their comparative causal responsibility will be small, even if the human is formally assigned major roles. Simply putting a human into the loop does not assure that the human will meaningfully contribute to the outcomes. The results demonstrate the descriptive value of the ResQu model for predicting behavior and perceptions of responsibility by considering the characteristics of the human, the intelligent system, the environment, and some systematic behavioral biases. The ResQu model is a new quantitative method that can be used in system design and can guide policy and legal decisions regarding human responsibility in events involving intelligent systems.

    The Responsibility Quantification (ResQu) Model of Human Interaction with Automation

    Intelligent systems and advanced automation are involved in information collection and evaluation, in decision-making, and in the implementation of chosen actions. In such systems, human responsibility becomes equivocal. Understanding human causal responsibility is particularly important when intelligent autonomous systems can harm people, as with autonomous vehicles or, most notably, with autonomous weapon systems (AWS). Using information theory, we develop a responsibility quantification (ResQu) model of human involvement in intelligent automated systems and demonstrate its application to decisions regarding AWS. The analysis reveals that human comparative responsibility for outcomes is often low, even when major functions are allocated to the human. Thus, broadly stated policies of keeping humans in the loop and having meaningful human control are misleading and cannot truly direct decisions on how to involve humans in intelligent systems and advanced automation. The current model is an initial step toward the complex goal of creating a comprehensive responsibility model that will enable quantification of human causal responsibility. It assumes stationarity and full knowledge of the characteristics of the human and the automation, and it ignores temporal aspects. Despite these limitations, it can aid in the analysis of system design alternatives and policy decisions regarding human responsibility in intelligent systems and advanced automation.
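
    The abstract does not reproduce the ResQu formula, but the core idea is information-theoretic: how much of the uncertainty about the outcome is resolved by the human's contribution, relative to the other contributors. The sketch below is a minimal, hypothetical illustration of that kind of calculation; the joint distribution, the variable roles (human action, automation output, outcome), and the specific ratio are assumptions made for the example and are not the published ResQu definition.

```python
# Illustrative sketch only: a simple information-theoretic "responsibility share",
# NOT the published ResQu formula. It asks how much of the outcome's uncertainty
# is resolved by the human's action beyond what the automation's output resolves.
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a probability vector, ignoring zero entries."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def conditional_mutual_information(joint):
    """I(H; Y | A) for a joint distribution p(h, a, y) given as a 3-D array
    indexed [human_action, automation_output, outcome]."""
    p_a = joint.sum(axis=(0, 2))                 # p(a)
    cmi = 0.0
    for a, pa in enumerate(p_a):
        if pa == 0:
            continue
        p_hy_given_a = joint[:, a, :] / pa       # p(h, y | a)
        p_h = p_hy_given_a.sum(axis=1)
        p_y = p_hy_given_a.sum(axis=0)
        # I(H; Y | A=a) = H(H|a) + H(Y|a) - H(H,Y|a)
        cmi += pa * (entropy(p_h) + entropy(p_y) - entropy(p_hy_given_a.ravel()))
    return cmi

# Hypothetical joint distribution over (human action, automation output, outcome),
# chosen only to make the example runnable.
joint = np.array([
    [[0.30, 0.02], [0.05, 0.08]],   # human action 0
    [[0.05, 0.10], [0.02, 0.38]],   # human action 1
])
p_y = joint.sum(axis=(0, 1))
h_y = entropy(p_y)                  # total outcome uncertainty H(Y)
share = conditional_mutual_information(joint) / h_y if h_y > 0 else 0.0
print(f"Illustrative human responsibility share: {share:.2f}")
```

    With highly capable automation, the automation's output already accounts for most of the outcome's uncertainty, so the human's conditional contribution, and hence this kind of share, tends to be small, which matches the abstract's conclusion.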

    A Markov Chain Model Checker

    Markov chains are widely used in the context of performance and reliability evaluation of systems of various nature. Model checking of such chains with respect to a given (branching) temporal logic formula has been proposed for both the discrete [17,6] and the continuous-time setting [4,8]. In this paper, we describe a prototype model checker for discrete and continuous-time Markov chains, the Erlangen–Twente Markov Chain Checker (E⊢MC²), where properties are expressed in appropriate extensions of CTL. We illustrate the general benefits of this approach and discuss the structure of the tool. Furthermore, we report on first successful applications of the tool to non-trivial examples, highlighting lessons learned during development and application of E⊢MC².
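
    The abstract describes checking branching temporal-logic properties over discrete- and continuous-time Markov chains. A central computation behind such a checker is the probability of reaching a set of states within a step bound. The sketch below is a minimal illustration of that calculation for a small, invented discrete-time chain; it is not the E⊢MC² tool, its input language, or its algorithms.

```python
# Minimal sketch: probability of reaching a target set within k steps in a
# discrete-time Markov chain, the kind of quantity a probabilistic model checker
# evaluates. The chain and horizon below are invented for illustration.
import numpy as np

# Hypothetical 3-state chain: 0 = operational, 1 = degraded, 2 = failed (absorbing).
P = np.array([
    [0.90, 0.08, 0.02],
    [0.10, 0.70, 0.20],
    [0.00, 0.00, 1.00],
])
target = {2}
k = 10  # step bound

# prob[s] = P(reach target within the remaining steps | current state s)
prob = np.array([1.0 if s in target else 0.0 for s in range(P.shape[0])])
for _ in range(k):
    new = P @ prob
    new[list(target)] = 1.0   # target states stay satisfied
    prob = new

print(f"P(reach 'failed' within {k} steps | start 'operational') = {prob[0]:.4f}")
```

    A model checker of this kind evaluates such probabilities for every state and compares them against the bound stated in the temporal-logic formula.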

    Asymmetric effects of false positive and false negative indications on the verification of alerts in different risk conditions

    This publication is freely accessible with the permission of the rights owner, due to an Alliance licence and a national licence funded by the DFG (German Research Foundation).
    Indications from alerts or alarm systems can be the trigger for decisions, or they can elicit further information search. We report an experiment on the tendency to collect additional information after receiving system indications. We varied the proclivity of the alarm system towards false positive or false negative indications and the perceived risk of the situation. Results showed that false-alarm-prone systems led to more frequent re-checking following both alarms and non-alarms in the high-risk condition, whereas miss-prone systems led to high re-checking rates only for non-alarms, representing an asymmetry effect. Increasing the risk led to more re-checks with all alarm systems, but it had a stronger impact in the false-alarm-prone condition. Results regarding the relation of risk and the asymmetry effect of false negative and false positive indications are discussed.
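
    The asymmetry reported above can be read through the predictive value of an indication: a false-alarm-prone system makes a raised alarm less diagnostic, while a miss-prone system makes a quiet system less diagnostic, so re-checking pays off in different situations. The following Bayes' rule sketch illustrates the point with invented event and error rates that are not taken from the experiment.

```python
# Illustrative only: how a system's proclivity towards false positives or false
# negatives changes how much an alarm (or its absence) should be trusted.
# The base rate and error rates are invented numbers, not from the experiment.
def predictive_values(p_event, p_false_alarm, p_miss):
    """Return P(event | alarm) and P(no event | no alarm) via Bayes' rule."""
    p_hit = 1.0 - p_miss
    p_alarm = p_hit * p_event + p_false_alarm * (1.0 - p_event)
    ppv = p_hit * p_event / p_alarm                                    # trust in an alarm
    npv = (1.0 - p_false_alarm) * (1.0 - p_event) / (1.0 - p_alarm)   # trust in silence
    return ppv, npv

p_event = 0.2
print("false-alarm-prone:", predictive_values(p_event, p_false_alarm=0.4, p_miss=0.05))
print("miss-prone:       ", predictive_values(p_event, p_false_alarm=0.05, p_miss=0.4))
```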